PS C:\Users\[user]\Desktop\jet-engine-degradation-prediction-master> cd DANCEST_model; python train_dancest_neural.py
2025-05-19 12:20:38.721557: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
WARNING:tensorflow:From C:\Users\[user]\Desktop\jet-engine-degradation-prediction-master\venv\lib\site-packages\...: The name tf.losses.sparse_softmax_cross_entropy is deprecated. Please use tf.compat.v1.losses.sparse_softmax_cross_entropy instead.

Loading [ANONYMIZED] LP corrosion dataset...
Found dataset at: ..\[ANONYMIZED]_lp_dataset\[ANONYMIZED]_lp_corrosion.csv
Loaded 12500000 corrosion data points
Columns: ['blade_id', 'time_point', 'x_coord', 'y_coord', 'corrosion_depth_mm', 'spatial_point_id']
Loading material properties from ..\[ANONYMIZED]_lp_dataset\[ANONYMIZED]_lp_materials.csv
Prepared data shapes - X: (12500000, 38), y: (12500000, 1)
Feature columns: ['spatial_point_id', 'initial_thickness_mm', 'initial_hardness_HV', 'chromium_content_pce_point', 'alloy_type_GTD-111', 'alloy_type_Inconel-718', 'alloy_type_Rene-77', 'alloy_type_Waspaloy', 'hard', 'surface_coating_Type-A', 'surface_coating_Type-B', 'surface_coating_Type-C', 'manufacturing_batch_, 'manufacturing_batch_5', 'manufacturing_batch_6', 'manufacturing_batch_7', 'manufacturing_batch_8', 'manufacturing_batch_12', 'manufacturing_batch_13', 'manufacturing_batch_14', 'manufacturing_batch_15', 'manufacturing_batch_19', 'manufacturing_batch_20']
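Column names such as alloy_type_GTD-111, surface_coating_Type-A, and manufacturing_batch_5 indicate one-hot encoded categoricals, e.g. via pandas `get_dummies` (an assumption; the preprocessing code is not shown in this log). A toy sketch with invented values:

```python
import pandas as pd

# Invented mini-frame; only the column naming mirrors the log above.
df = pd.DataFrame({
    "alloy_type": ["GTD-111", "Inconel-718", "Rene-77", "Waspaloy"],
    "surface_coating": ["Type-A", "Type-B", "Type-C", "Type-A"],
    "initial_thickness_mm": [2.1, 2.3, 2.0, 2.2],
})

# One-hot encode the categoricals; numeric columns pass through unchanged.
features = pd.get_dummies(df, columns=["alloy_type", "surface_coating"])
print(sorted(features.columns))
```

Each categorical value becomes its own 0/1 column named `<original_column>_<value>`, matching the pattern in the feature list.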
Starting hyperparameter tuning...
Testing 27 hyperparameter combinations
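The 27 combinations are consistent with a full grid over the three learning rates, three batch sizes, and three hidden dimensions that appear later in this log; a minimal sketch of how such a grid is typically enumerated (the actual script may build it differently):

```python
from itertools import product

# Grid values inferred from the combinations printed below.
learning_rates = [0.0001, 0.0005, 0.001]
batch_sizes = [32, 64, 128]
hidden_dims = [128, 256, 512]

combos = list(product(learning_rates, batch_sizes, hidden_dims))
print(len(combos))  # 27
print(combos[0])    # (0.0001, 32, 128) -> "combination 1" below
```

The iteration order (hidden dim innermost, then batch size, then learning rate) matches the order the combinations are tested in below.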

Testing combination 1/27:
Learning rate: 0.0001, Batch size: 32, Hidden dim: 128
WARNING:tensorflow:From C:\Users\[user]\Desktop\jet-engine-degradation-prediction-master\venv\lib\site-packages\...: The name tf.executing_eagerly_outside_functions is deprecated. Please use tf.compat.v1.executing_eagerly_outside_functions instead.

2025-05-19 12:22:30.679153: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: SSE SSE2 SSE3 SSE4.1 SSE4.2 AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
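The 351563 steps per epoch seen below is consistent with a 90/10 train/validation split of the 12.5M points (an assumption; the split is not printed in this log):

```python
import math

# Assumption: 90% of the 12.5M points are used for training.
total_points = 12_500_000
train_size = int(total_points * 0.9)  # 11,250,000

# Steps per epoch = ceil(train samples / batch size), one value per batch size tested.
steps_per_epoch = {bs: math.ceil(train_size / bs) for bs in (32, 64, 128)}
print(steps_per_epoch)  # matches the 351563 / 175782 / 87891 step counts in this log
```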
Epoch 1/100
WARNING:tensorflow:From C:\Users\[user]\Desktop\jet-engine-degradation-prediction-master\venv\lib\site-packages\...: The name tf.ragged.RaggedTensorValue is deprecated. Please use tf.compat.v1.ragged.RaggedTensorValue instead.

WARNING:tensorflow:From C:\Users\[user]\Desktop\jet-engine-degradation-prediction-master\venv\lib\site-packages\...: The name tf.executing_eagerly_outside_functions is deprecated. Please use tf.compat.v1.executing_eagerly_outside_functions instead.

351563/351563 [==============================] - 868s 2ms/step - loss: 0.0599 - mae: 0.1637 - rmse: 0.244
Epoch 2/100
351563/351563 [==============================] - 815s 2ms/step - loss: 0.0510 - mae: 0.1497 - rmse: 0.225
Epoch 3/100
351563/351563 [==============================] - 819s 2ms/step - loss: 0.0495 - mae: 0.1469 - rmse: 0.222
Epoch 4/100
351563/351563 [==============================] - 919s 3ms/step - loss: 0.0487 - mae: 0.1454 - rmse: 0.220
Epoch 5/100
351563/351563 [==============================] - 768s 2ms/step - loss: 0.0481 - mae: 0.1443 - rmse: 0.219
Epoch 6/100
351563/351563 [==============================] - 775s 2ms/step - loss: 0.0478 - mae: 0.1436 - rmse: 0.218
Epoch 7/100
351563/351563 [==============================] - 768s 2ms/step - loss: 0.0475 - mae: 0.1431 - rmse: 0.218
Epoch 8/100
351563/351563 [==============================] - 770s 2ms/step - loss: 0.0473 - mae: 0.1427 - rmse: 0.217
351563/351563 [==============================] - 804s 2ms/step - loss: 0.0468 - mae: 0.1417 - rmse: 0.216
Epoch 13/100
351563/351563 [==============================] - 754s 2ms/step - loss: 0.0467 - mae: 0.1415 - rmse: 0.216
Epoch 14/100
351563/351563 [==============================] - 872s 2ms/step - loss: 0.0466 - mae: 0.1414 - rmse: 0.216
Epoch 15/100
351563/351563 [==============================] - 923s 3ms/step - loss: 0.0465 - mae: 0.1413 - rmse: 0.215

Testing combination 2/27:
Learning rate: 0.0001, Batch size: 32, Hidden dim: 256
Epoch 1/100
351563/351563 [==============================] - 2162s 6ms/step - loss: 0.0538 - mae: 0.1533 - rmse: 0.23
Epoch 2/100
351563/351563 [==============================] - 2037s 6ms/step - loss: 0.0466 - mae: 0.1412 - rmse: 0.21
Epoch 3/100
351563/351563 [==============================] - 1076s 3ms/step - loss: 0.0453 - mae: 0.1385 - rmse: 0.21
Epoch 4/100
351563/351563 [==============================] - 1076s 3ms/step - loss: 0.0445 - mae: 0.1371 - rmse: 0.21
Epoch 5/100
351563/351563 [==============================] - 1074s 3ms/step - loss: 0.0441 - mae: 0.1361 - rmse: 0.20
Epoch 6/100
351563/351563 [==============================] - 1069s 3ms/step - loss: 0.0437 - mae: 0.1355 - rmse: 0.20
Epoch 7/100
351563/351563 [==============================] - 1074s 3ms/step - loss: 0.0435 - mae: 0.1350 - rmse: 0.20
Epoch 8/100
351563/351563 [==============================] - 1111s 3ms/step - loss: 0.0433 - mae: 0.1346 - rmse: 0.20
Epoch 9/100
351563/351563 [==============================] - 1073s 3ms/step - loss: 0.0427 - mae: 0.1335 - rmse: 0.2001 - val_mae: 0.1313 - val_rmse: 0.2003
Epoch 13/100
351563/351563 [==============================] - 1093s 3ms/step - loss: 0.0426 - mae: 0.1333 - rmse: 0.2099 - val_mae: 0.1319 - val_rmse: 0.1998
Epoch 14/100
351563/351563 [==============================] - 1073s 3ms/step - loss: 0.0425 - mae: 0.1331 - rmse: 0.2003 - val_mae: 0.1337 - val_rmse: 0.2008
Epoch 15/100
351563/351563 [==============================] - 1072s 3ms/step - loss: 0.0424 - mae: 0.1329 - rmse: 0.2003 - val_mae: 0.1342 - val_rmse: 0.2008
Epoch 16/100
351563/351563 [==============================] - 1076s 3ms/step - loss: 0.0423 - mae: 0.1328 - rmse: 0.2007 - val_mae: 0.1370 - val_rmse: 0.2017
Epoch 17/100
351563/351563 [==============================] - 1134s 3ms/step - loss: 0.0422 - mae: 0.1326 - rmse: 0.2000 - val_mae: 0.1325 - val_rmse: 0.1999
Epoch 18/100
351563/351563 [==============================] - 1110s 3ms/step - loss: 0.0422 - mae: 0.1324 - rmse: 0.2009 - val_mae: 0.1362 - val_rmse: 0.2022
Epoch 19/100
351563/351563 [==============================] - 1113s 3ms/step - loss: 0.0421 - mae: 0.1323 - rmse: 0.2003 - val_mae: 0.1348 - val_rmse: 0.2007
Epoch 20/100
351563/351563 [==============================] - 1083s 3ms/step - loss: 0.0420 - mae: 0.1322 - rmse: 0.2005 - val_mae: 0.1359 - val_rmse: 0.2013
Epoch 21/100
351563/351563 [==============================] - 1022s 3ms/step - loss: 0.0420 - mae: 0.1321 - rmse: 0.2007 - val_mae: 0.1356 - val_rmse: 0.2018
Epoch 22/100
351563/351563 [==============================] - 1021s 3ms/step - loss: 0.0420 - mae: 0.1320 - rmse: 0.2003 - val_mae: 0.1343 - val_rmse: 0.2007
Epoch 23/100
351563/351563 [==============================] - 1019s 3ms/step - loss: 0.0419 - mae: 0.1320 - rmse: 0.2008 - val_mae: 0.1365 - val_rmse: 0.2019
Epoch 24/100
351563/351563 [==============================] - 1018s 3ms/step - loss: 0.0419 - mae: 0.1319 - rmse: 0.2017 - val_mae: 0.1418 - val_rmse: 0.2041
Epoch 25/100
351563/351563 [==============================] - 1012s 3ms/step - loss: 0.0419 - mae: 0.1318 - rmse: 0.2009 - val_mae: 0.1377 - val_rmse: 0.2022

Testing combination 3/27:
Learning rate: 0.0001, Batch size: 32, Hidden dim: 512
Epoch 1/100
351563/351563 [==============================] - 1737s 5ms/step - loss: 0.0499 - mae: 0.1457 - rmse: 0.2215 - val_mae: 0.1316 - val_rmse: 0.2038
Epoch 2/100
351563/351563 [==============================] - 1731s 5ms/step - loss: 0.0437 - mae: 0.1350 - rmse: 0.2000 - val_mae: 0.1283 - val_rmse: 0.2000
Epoch 3/100
351563/351563 [==============================] - 1736s 5ms/step - loss: 0.0426 - mae: 0.1331 - rmse: 0.2095 - val_mae: 0.1284 - val_rmse: 0.1988
Epoch 4/100
351563/351563 [==============================] - 1839s 5ms/step - loss: 0.0420 - mae: 0.1319 - rmse: 0.2092 - val_mae: 0.1254 - val_rmse: 0.1980
Epoch 5/100
351563/351563 [==============================] - 1723s 5ms/step - loss: 0.0416 - mae: 0.1311 - rmse: 0.2092 - val_mae: 0.1265 - val_rmse: 0.1979
Epoch 6/100
351563/351563 [==============================] - 1721s 5ms/step - loss: 0.0413 - mae: 0.1305 - rmse: 0.2091 - val_mae: 0.1276 - val_rmse: 0.1978
Epoch 7/100
351563/351563 [==============================] - 1733s 5ms/step - loss: 0.0410 - mae: 0.1301 - rmse: 0.2093 - val_mae: 0.1264 - val_rmse: 0.1984
Epoch 8/100
351563/351563 [==============================] - 1724s 5ms/step - loss: 0.0408 - mae: 0.1296 - rmse: 0.2087 - val_mae: 0.1270 - val_rmse: 0.1967
Epoch 9/100
351563/351563 [==============================] - 1721s 5ms/step - loss: 0.0407 - mae: 0.1293 - rmse: 0.2083 - val_mae: 0.1253 - val_rmse: 0.1958
Epoch 10/100
351563/351563 [==============================] - 1717s 5ms/step - loss: 0.0406 - mae: 0.1291 - rmse: 0.20
Epoch 11/100
351563/351563 [==============================] - 1721s 5ms/step - loss: 0.0404 - mae: 0.1288 - rmse: 0.20
Epoch 12/100
351563/351563 [==============================] - 1721s 5ms/step - loss: 0.0403 - mae: 0.1286 - rmse: 0.20
Epoch 13/100
351563/351563 [==============================] - 1722s 5ms/step - loss: 0.0402 - mae: 0.1284 - rmse: 0.20
Epoch 14/100
351563/351563 [==============================] - 1720s 5ms/step - loss: 0.0401 - mae: 0.1282 - rmse: 0.20
Epoch 15/100
351563/351563 [==============================] - 1733s 5ms/step - loss: 0.0400 - mae: 0.1281 - rmse: 0.20
Epoch 16/100
216831/351563 [=================>............] - ETA: 37:25 - loss: 0.0400 - mae: 0.1279 - rmse: 0.1999
Testing combination 3/27 (continued):
Learning rate: 0.0001, Batch size: 32, Hidden dim: 512
Epoch 17/100
351563/351563 [==============================] - 1728s 5ms/step - loss: 0.0399 - mae: 0.1278 - rmse: 0.1998 - val_mae: 0.1248 - val_rmse: 0.1954
Epoch 18/100
351563/351563 [==============================] - 1731s 5ms/step - loss: 0.0398 - mae: 0.1276 - rmse: 0.1996 - val_mae: 0.1251 - val_rmse: 0.1952
Early stopping triggered - no improvement for 5 epochs
Best validation MAE: 0.1248, Best validation RMSE: 0.1952
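The "no improvement for 5 epochs" behavior above matches patience-5 early stopping on validation MAE (in Keras this is typically `tf.keras.callbacks.EarlyStopping(monitor='val_mae', patience=5)`; the script's actual mechanism is not shown). A framework-free sketch of the logic:

```python
def should_stop(val_mae_history, patience=5):
    """Return True once val_mae has not improved for `patience` consecutive epochs."""
    if len(val_mae_history) <= patience:
        return False
    # Epoch index of the best (lowest) validation MAE so far.
    best_epoch = val_mae_history.index(min(val_mae_history))
    # Stop once `patience` epochs have elapsed since that best epoch.
    return len(val_mae_history) - 1 - best_epoch >= patience

# Invented history: best value at index 2, then five non-improving epochs.
history = [0.1310, 0.1265, 0.1248, 0.1253, 0.1250, 0.1255, 0.1249, 0.1251]
print(should_stop(history))  # True
```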

Testing combination 4/27:
Learning rate: 0.0001, Batch size: 64, Hidden dim: 128
Epoch 1/100
175782/175782 [==============================] - 445s 3ms/step - loss: 0.0612 - mae: 0.1651 - rmse: 0.2472
Epoch 2/100
175782/175782 [==============================] - 421s 2ms/step - loss: 0.0523 - mae: 0.1508 - rmse: 0.2287
Epoch 3/100
175782/175782 [==============================] - 418s 2ms/step - loss: 0.0506 - mae: 0.1478 - rmse: 0.2249
Epoch 4/100
175782/175782 [==============================] - 425s 2ms/step - loss: 0.0497 - mae: 0.1461 - rmse: 0.2228
Epoch 5/100
175782/175782 [==============================] - 419s 2ms/step - loss: 0.0491 - mae: 0.1451 - rmse: 0.2215
Epoch 6/100
175782/175782 [==============================] - 422s 2ms/step - loss: 0.0487 - mae: 0.1444 - rmse: 0.2207
Epoch 7/100
175782/175782 [==============================] - 418s 2ms/step - loss: 0.0484 - mae: 0.1439 - rmse: 0.2200
Epoch 8/100
175782/175782 [==============================] - 421s 2ms/step - loss: 0.0481 - mae: 0.1435 - rmse: 0.2194
Epoch 9/100
175782/175782 [==============================] - 425s 2ms/step - loss: 0.0479 - mae: 0.1431 - rmse: 0.2189
Epoch 10/100
175782/175782 [==============================] - 423s 2ms/step - loss: 0.0477 - mae: 0.1428 - rmse: 0.2185
Epoch 11/100
175782/175782 [==============================] - 419s 2ms/step - loss: 0.0476 - mae: 0.1426 - rmse: 0.2182
Epoch 12/100
175782/175782 [==============================] - 424s 2ms/step - loss: 0.0474 - mae: 0.1424 - rmse: 0.2179
Training completed - 12 epochs

Testing combination 5/27:
Learning rate: 0.0001, Batch size: 64, Hidden dim: 256
Epoch 1/100
175782/175782 [==============================] - 1089s 6ms/step - loss: 0.0549 - mae: 0.1545 - rmse: 0.2343
Epoch 2/100
175782/175782 [==============================] - 1065s 6ms/step - loss: 0.0475 - mae: 0.1422 - rmse: 0.2180 - val_mae: 0.1398 - val_rmse: 0.2165
Epoch 3/100
175782/175782 [==============================] - 1071s 6ms/step - loss: 0.0461 - mae: 0.1394 - rmse: 0.2148 - val_mae: 0.1385 - val_rmse: 0.2142
Epoch 4/100
175782/175782 [==============================] - 1068s 6ms/step - loss: 0.0453 - mae: 0.1380 - rmse: 0.2129 - val_mae: 0.1376 - val_rmse: 0.2128
Epoch 5/100
175782/175782 [==============================] - 1072s 6ms/step - loss: 0.0448 - mae: 0.1370 - rmse: 0.2116 - val_mae: 0.1371 - val_rmse: 0.2118
Epoch 6/100
175782/175782 [==============================] - 1069s 6ms/step - loss: 0.0444 - mae: 0.1363 - rmse: 0.2107 - val_mae: 0.1367 - val_rmse: 0.2111
Epoch 7/100
175782/175782 [==============================] - 1073s 6ms/step - loss: 0.0441 - mae: 0.1357 - rmse: 0.2099 - val_mae: 0.1364 - val_rmse: 0.2105
Training completed - 7 epochs

Testing combination 6/27:
Learning rate: 0.0001, Batch size: 64, Hidden dim: 512
Epoch 1/100
175782/175782 [==============================] - 1815s 10ms/step - loss: 0.0508 - mae: 0.1468 - rmse: 0.2253
Epoch 2/100
175782/175782 [==============================] - 1798s 10ms/step - loss: 0.0446 - mae: 0.1360 - rmse: 0.2112 - val_mae: 0.1334 - val_rmse: 0.2089
Epoch 3/100
175782/175782 [==============================] - 1803s 10ms/step - loss: 0.0434 - mae: 0.1340 - rmse: 0.2084 - val_mae: 0.1318 - val_rmse: 0.2067
Epoch 4/100
175782/175782 [==============================] - 1801s 10ms/step - loss: 0.0427 - mae: 0.1327 - rmse: 0.2066 - val_mae: 0.1309 - val_rmse: 0.2053
Epoch 5/100
175782/175782 [==============================] - 1799s 10ms/step - loss: 0.0422 - mae: 0.1318 - rmse: 0.2052 - val_mae: 0.1302 - val_rmse: 0.2043
Epoch 6/100
175782/175782 [==============================] - 1804s 10ms/step - loss: 0.0418 - mae: 0.1311 - rmse: 0.2042 - val_mae: 0.1297 - val_rmse: 0.2036
Epoch 7/100
175782/175782 [==============================] - 1806s 10ms/step - loss: 0.0415 - mae: 0.1305 - rmse: 0.2034 - val_mae: 0.1293 - val_rmse: 0.2030
Epoch 8/100
175782/175782 [==============================] - 1802s 10ms/step - loss: 0.0413 - mae: 0.1300 - rmse: 0.2027 - val_mae: 0.1290 - val_rmse: 0.2025
Training completed - 8 epochs

Testing combination 7/27:
Learning rate: 0.0001, Batch size: 128, Hidden dim: 128
Epoch 1/100
87891/87891 [==============================] - 228s 3ms/step - loss: 0.0625 - mae: 0.1667 - rmse: 0.2499
Epoch 2/100
87891/87891 [==============================] - 215s 2ms/step - loss: 0.0535 - mae: 0.1522 - rmse: 0.2313
Epoch 3/100
87891/87891 [==============================] - 218s 2ms/step - loss: 0.0517 - mae: 0.1490 - rmse: 0.2274
Epoch 4/100
87891/87891 [==============================] - 216s 2ms/step - loss: 0.0507 - mae: 0.1472 - rmse: 0.2251
Epoch 5/100
87891/87891 [==============================] - 219s 2ms/step - loss: 0.0500 - mae: 0.1460 - rmse: 0.2236
Epoch 6/100
87891/87891 [==============================] - 217s 2ms/step - loss: 0.0496 - mae: 0.1452 - rmse: 0.2226
Epoch 7/100
87891/87891 [==============================] - 220s 3ms/step - loss: 0.0492 - mae: 0.1446 - rmse: 0.2218
Epoch 8/100
87891/87891 [==============================] - 218s 2ms/step - loss: 0.0489 - mae: 0.1441 - rmse: 0.2212
Training completed - 8 epochs

Testing combination 8/27:
Learning rate: 0.0001, Batch size: 128, Hidden dim: 256
Epoch 1/100
87891/87891 [==============================] - 542s 6ms/step - loss: 0.0561 - mae: 0.1559 - rmse: 0.2368
Epoch 2/100
87891/87891 [==============================] - 528s 6ms/step - loss: 0.0485 - mae: 0.1432 - rmse: 0.2203 - val_mae: 0.1409 - val_rmse: 0.2186
Epoch 3/100
87891/87891 [==============================] - 532s 6ms/step - loss: 0.0469 - mae: 0.1403 - rmse: 0.2166 - val_mae: 0.1394 - val_rmse: 0.2158
Epoch 4/100
87891/87891 [==============================] - 529s 6ms/step - loss: 0.0460 - mae: 0.1387 - rmse: 0.2145 - val_mae: 0.1385 - val_rmse: 0.2141
Epoch 5/100
87891/87891 [==============================] - 535s 6ms/step - loss: 0.0454 - mae: 0.1376 - rmse: 0.2130 - val_mae: 0.1378 - val_rmse: 0.2130
Training completed - 5 epochs

Testing combination 9/27:
Learning rate: 0.0001, Batch size: 128, Hidden dim: 512
Epoch 1/100
87891/87891 [==============================] - 891s 10ms/step - loss: 0.0519 - mae: 0.1481 - rmse: 0.2278
Epoch 2/100
87891/87891 [==============================] - 878s 10ms/step - loss: 0.0454 - mae: 0.1369 - rmse: 0.2131 - val_mae: 0.1344 - val_rmse: 0.2108
Epoch 3/100
87891/87891 [==============================] - 883s 10ms/step - loss: 0.0441 - mae: 0.1348 - rmse: 0.2099 - val_mae: 0.1327 - val_rmse: 0.2081
Epoch 4/100
87891/87891 [==============================] - 880s 10ms/step - loss: 0.0433 - mae: 0.1334 - rmse: 0.2078 - val_mae: 0.1316 - val_rmse: 0.2063
Epoch 5/100
87891/87891 [==============================] - 885s 10ms/step - loss: 0.0427 - mae: 0.1324 - rmse: 0.2062 - val_mae: 0.1308 - val_rmse: 0.2050
Epoch 6/100
87891/87891 [==============================] - 882s 10ms/step - loss: 0.0423 - mae: 0.1316 - rmse: 0.2050 - val_mae: 0.1302 - val_rmse: 0.2040
Training completed - 6 epochs

Testing combination 10/27:
Learning rate: 0.0005, Batch size: 32, Hidden dim: 128
Epoch 1/100
351563/351563 [==============================] - 732s 2ms/step - loss: 0.0521 - mae: 0.1502 - rmse: 0.2282
Epoch 2/100
351563/351563 [==============================] - 698s 2ms/step - loss: 0.0459 - mae: 0.1390 - rmse: 0.2142
Epoch 3/100
351563/351563 [==============================] - 705s 2ms/step - loss: 0.0446 - mae: 0.1365 - rmse: 0.2113
Epoch 4/100
351563/351563 [==============================] - 701s 2ms/step - loss: 0.0439 - mae: 0.1351 - rmse: 0.2095
Training completed - 4 epochs

Testing combination 11/27:
Learning rate: 0.0005, Batch size: 32, Hidden dim: 256
Epoch 1/100
351563/351563 [==============================] - 1821s 5ms/step - loss: 0.0477 - mae: 0.1419 - rmse: 0.2184
Epoch 2/100
351563/351563 [==============================] - 1789s 5ms/step - loss: 0.0423 - mae: 0.1323 - rmse: 0.2056 - val_mae: 0.1298 - val_rmse: 0.2034
Epoch 3/100
351563/351563 [==============================] - 1798s 5ms/step - loss: 0.0412 - mae: 0.1304 - rmse: 0.2028 - val_mae: 0.1285 - val_rmse: 0.2015
Epoch 4/100
351563/351563 [==============================] - 1801s 5ms/step - loss: 0.0405 - mae: 0.1291 - rmse: 0.2009 - val_mae: 0.1276 - val_rmse: 0.2001
Training completed - 4 epochs

Testing combination 12/27:
Learning rate: 0.0005, Batch size: 32, Hidden dim: 512
Epoch 1/100
351563/351563 [==============================] - 1542s 4ms/step - loss: 0.0441 - mae: 0.1353 - rmse: 0.2099
Epoch 2/100
351563/351563 [==============================] - 1539s 4ms/step - loss: 0.0395 - mae: 0.1275 - rmse: 0.1987 - val_mae: 0.1252 - val_rmse: 0.1967
Epoch 3/100
351563/351563 [==============================] - 1546s 4ms/step - loss: 0.0386 - mae: 0.1259 - rmse: 0.1963 - val_mae: 0.1241 - val_rmse: 0.1950
Training completed - 3 epochs

Testing combination 13/27:
Learning rate: 0.0005, Batch size: 64, Hidden dim: 128
Epoch 1/100
175782/175782 [==============================] - 372s 2ms/step - loss: 0.0535 - mae: 0.1515 - rmse: 0.2313
Epoch 2/100
175782/175782 [==============================] - 356s 2ms/step - loss: 0.0468 - mae: 0.1399 - rmse: 0.2163
Epoch 3/100
175782/175782 [==============================] - 359s 2ms/step - loss: 0.0453 - mae: 0.1373 - rmse: 0.2128
Training completed - 3 epochs

Testing combination 14/27:
Learning rate: 0.0005, Batch size: 64, Hidden dim: 256
Epoch 1/100
175782/175782 [==============================] - 912s 5ms/step - loss: 0.0491 - mae: 0.1433 - rmse: 0.2215
Epoch 2/100
175782/175782 [==============================] - 897s 5ms/step - loss: 0.0431 - mae: 0.1334 - rmse: 0.2076 - val_mae: 0.1310 - val_rmse: 0.2055
Epoch 3/100
175782/175782 [==============================] - 904s 5ms/step - loss: 0.0419 - mae: 0.1315 - rmse: 0.2046 - val_mae: 0.1296 - val_rmse: 0.2031
Training completed - 3 epochs

Testing combination 15/27:
Learning rate: 0.0005, Batch size: 64, Hidden dim: 512
Epoch 1/100
175782/175782 [==============================] - 1523s 9ms/step - loss: 0.0453 - mae: 0.1372 - rmse: 0.2128
Epoch 2/100
175782/175782 [==============================] - 1501s 9ms/step - loss: 0.0402 - mae: 0.1286 - rmse: 0.2004 - val_mae: 0.1263 - val_rmse: 0.1984
Training completed - 2 epochs

Testing combination 16/27:
Learning rate: 0.0005, Batch size: 128, Hidden dim: 128
Epoch 1/100
87891/87891 [==============================] - 191s 2ms/step - loss: 0.0548 - mae: 0.1528 - rmse: 0.2342
Epoch 2/100
87891/87891 [==============================] - 178s 2ms/step - loss: 0.0476 - mae: 0.1409 - rmse: 0.2181
Training completed - 2 epochs

Testing combination 17/27:
Learning rate: 0.0005, Batch size: 128, Hidden dim: 256
Epoch 1/100
87891/87891 [==============================] - 456s 5ms/step - loss: 0.0503 - mae: 0.1448 - rmse: 0.2244
Epoch 2/100
87891/87891 [==============================] - 441s 5ms/step - loss: 0.0439 - mae: 0.1346 - rmse: 0.2095 - val_mae: 0.1322 - val_rmse: 0.2074
Training completed - 2 epochs

Testing combination 18/27:
Learning rate: 0.0005, Batch size: 128, Hidden dim: 512
Epoch 1/100
87891/87891 [==============================] - 742s 8ms/step - loss: 0.0465 - mae: 0.1383 - rmse: 0.2156
Epoch 2/100
87891/87891 [==============================] - 726s 8ms/step - loss: 0.0409 - mae: 0.1296 - rmse: 0.2022 - val_mae: 0.1274 - val_rmse: 0.2003
Training completed - 2 epochs

Testing combination 19/27:
Learning rate: 0.001, Batch size: 32, Hidden dim: 128
Epoch 1/100
351563/351563 [==============================] - 645s 2ms/step - loss: 0.0486 - mae: 0.1424 - rmse: 0.2204
Training completed - 1 epoch

Testing combination 20/27:
Learning rate: 0.001, Batch size: 32, Hidden dim: 256
Epoch 1/100
351563/351563 [==============================] - 1456s 4ms/step - loss: 0.0432 - mae: 0.1339 - rmse: 0.2077
Training completed - 1 epoch

Testing combination 21/27:
Learning rate: 0.001, Batch size: 32, Hidden dim: 512
Epoch 1/100
351563/351563 [==============================] - 1234s 4ms/step - loss: 0.0401 - mae: 0.1285 - rmse: 0.2002
Training completed - 1 epoch

Testing combination 22/27:
Learning rate: 0.001, Batch size: 64, Hidden dim: 128
Epoch 1/100
175782/175782 [==============================] - 324s 2ms/step - loss: 0.0499 - mae: 0.1442 - rmse: 0.2234
Training completed - 1 epoch

Testing combination 23/27:
Learning rate: 0.001, Batch size: 64, Hidden dim: 256
Epoch 1/100
175782/175782 [==============================] - 721s 4ms/step - loss: 0.0444 - mae: 0.1353 - rmse: 0.2107
Training completed - 1 epoch

Testing combination 24/27:
Learning rate: 0.001, Batch size: 64, Hidden dim: 512
Epoch 1/100
175782/175782 [==============================] - 1201s 7ms/step - loss: 0.0409 - mae: 0.1297 - rmse: 0.2022
Training completed - 1 epoch

Testing combination 25/27:
Learning rate: 0.001, Batch size: 128, Hidden dim: 128
Epoch 1/100
87891/87891 [==============================] - 167s 2ms/step - loss: 0.0512 - mae: 0.1458 - rmse: 0.2262
Training completed - 1 epoch

Testing combination 26/27:
Learning rate: 0.001, Batch size: 128, Hidden dim: 256
Epoch 1/100
87891/87891 [==============================] - 361s 4ms/step - loss: 0.0456 - mae: 0.1369 - rmse: 0.2136
Training completed - 1 epoch

Testing combination 27/27:
Learning rate: 0.001, Batch size: 128, Hidden dim: 512
Epoch 1/100
87891/87891 [==============================] - 601s 7ms/step - loss: 0.0417 - mae: 0.1309 - rmse: 0.2043
Training completed - 1 epoch

=== HYPERPARAMETER TUNING RESULTS ===

Best performing combinations (by final validation MAE):
1. Combination 12 (lr=0.0005, batch=32, hidden=512): val_mae=0.1241, val_rmse=0.1950
2. Combination 3 (lr=0.0001, batch=32, hidden=512): val_mae=0.1248, val_rmse=0.1952  
3. Combination 15 (lr=0.0005, batch=64, hidden=512): val_mae=0.1263, val_rmse=0.1984
4. Combination 18 (lr=0.0005, batch=128, hidden=512): val_mae=0.1274, val_rmse=0.2003
5. Combination 11 (lr=0.0005, batch=32, hidden=256): val_mae=0.1276, val_rmse=0.2001
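The ranking above can be reproduced by sorting on final validation MAE; the tuples below copy the numbers from the list above:

```python
# (combination, lr, batch, hidden, val_mae, val_rmse), copied from the results above.
results = [
    (3,  0.0001, 32,  512, 0.1248, 0.1952),
    (11, 0.0005, 32,  256, 0.1276, 0.2001),
    (12, 0.0005, 32,  512, 0.1241, 0.1950),
    (15, 0.0005, 64,  512, 0.1263, 0.1984),
    (18, 0.0005, 128, 512, 0.1274, 0.2003),
]

ranked = sorted(results, key=lambda r: r[4])  # ascending validation MAE
best = ranked[0]
print(f"Best: combination {best[0]} (lr={best[1]}, batch={best[2]}, hidden={best[3]})")
```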

Key observations:
- The highest learning rate (0.001) hit the stopping criterion after a single epoch, leaving no room to refine the fit
- Medium learning rate (0.0005) showed good balance of speed and performance
- Lower learning rate (0.0001) required more epochs but achieved stable convergence
- Hidden dimension 512 consistently outperformed smaller architectures
- Batch size 32 generally performed best for this dataset size
- Best overall configuration: lr=0.0005, batch_size=32, hidden_dim=512

Recommended model configuration for production:
Learning rate: 0.0005
Batch size: 32  
Hidden dimensions: 512
Expected performance: MAE ~0.124, RMSE ~0.195

Total hyperparameter search time: ~47.3 hours
Dataset: [ANONYMIZED] LP corrosion (12.5M samples, 38 features)